
    Understanding Slow Feature Analysis: A Mathematical Framework

    Slow feature analysis is an algorithm for unsupervised learning of invariant representations from data with temporal correlations. Here, we present a mathematical analysis of slow feature analysis for the case where the input-output functions are not restricted in complexity. We show that the optimal functions obey a partial differential eigenvalue problem of a type that is common in theoretical physics. This analogy allows the transfer of mathematical techniques and intuitions from physics to concrete applications of slow feature analysis, thereby providing the means for analytical predictions and a better understanding of simulation results. We put particular emphasis on the situation where the input data are generated from a set of statistically independent sources. The dependence of the optimal functions on the sources is calculated analytically for the cases where the sources have a Gaussian or uniform distribution.
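
    For reference, the underlying SFA optimization problem can be stated as follows (a sketch in our own notation; the paper analyzes the case where the functions g_j are unrestricted in complexity):

```latex
% Standard SFA problem: given an input signal x(t), find functions g_j that
% minimize the temporal variation of y_j(t) = g_j(x(t)) under zero-mean,
% unit-variance, and decorrelation constraints.
\begin{align}
  \min_{g_j}\;\; & \Delta(y_j) = \bigl\langle \dot{y}_j^{\,2} \bigr\rangle_t,
  \qquad y_j(t) = g_j\bigl(\mathbf{x}(t)\bigr), \\
  \text{s.t.}\;\; & \langle y_j \rangle_t = 0, \quad
                    \langle y_j^{2} \rangle_t = 1, \quad
                    \langle y_i\, y_j \rangle_t = 0 \;\; (i < j).
\end{align}
% In the unrestricted function space, the optima are characterized by a partial
% differential eigenvalue problem: schematically, an operator D (determined by
% the input statistics) satisfies D g_j = \lambda_j g_j with \Delta(y_j) = \lambda_j.
```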

    An Extension of Slow Feature Analysis for Nonlinear Blind Source Separation

    We present and test an extension of slow feature analysis as a novel approach to nonlinear blind source separation. The algorithm relies on temporal correlations and iteratively reconstructs a set of statistically independent sources from arbitrary nonlinear instantaneous mixtures. Simulations show that it is able to invert a complicated nonlinear mixture of two audio signals with a reliability of more than 90%. The algorithm is based on a mathematical analysis of slow feature analysis for the case of input data that are generated from statistically independent sources.
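
    A deflation-style sketch of how temporal slowness can drive nonlinear source separation is given below. This is a toy illustration of the general idea only, not the paper's algorithm; the mixture, the polynomial expansion, and the removal step are all simplified assumptions:

```python
# Schematic SFA-based nonlinear blind source separation (deflation style).
# NOT the paper's exact algorithm; a toy illustration of the idea.
import numpy as np

def linear_sfa(x, n_out):
    """Linear SFA: whiten x, then take the directions of slowest variation."""
    x = x - x.mean(axis=0)
    d, u = np.linalg.eigh(np.cov(x, rowvar=False))
    keep = d > 1e-10
    z = x @ (u[:, keep] / np.sqrt(d[keep]))        # whitened signal
    dd, du = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ du[:, :n_out]                       # slowest components first

def poly_expand(x, degree=3):
    """Monomials up to `degree` per dimension plus pairwise products."""
    feats = [x ** k for k in range(1, degree + 1)]
    n = x.shape[1]
    feats += [x[:, [i]] * x[:, [j]] for i in range(n) for j in range(i + 1, n)]
    return np.concatenate(feats, axis=1)

# Toy data: two sources, nonlinearly and instantaneously mixed.
t = np.linspace(0, 40 * np.pi, 8000)
s = np.column_stack([np.sin(t), np.sin(np.sqrt(2) * t + 1.0)])
x = np.column_stack([s[:, 0] + 0.4 * s[:, 1] ** 2,
                     s[:, 1] + 0.4 * s[:, 0] * s[:, 1]])

# Step 1: expand nonlinearly and extract the slowest feature -> first estimate.
h = poly_expand(x)
s1_hat = linear_sfa(h, 1)[:, 0]

# Step 2 (deflation): project nonlinear functions of the first estimate out of
# the expanded data, then rerun SFA to estimate the next source.
basis = np.column_stack([np.ones_like(s1_hat), poly_expand(s1_hat[:, None])])
coeffs, *_ = np.linalg.lstsq(basis, h, rcond=None)
s2_hat = linear_sfa(h - basis @ coeffs, 1)[:, 0]

for name, est, src in [("source 1", s1_hat, s[:, 0]), ("source 2", s2_hat, s[:, 1])]:
    r = abs(np.corrcoef(est, src)[0, 1])
    print(f"{name}: |correlation with a true source| = {r:.2f}")
```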

    How to Solve Classification and Regression Problems on High-Dimensional Data with a Supervised Extension of Slow Feature Analysis

    Supervised learning from high-dimensional data, e.g., multimedia data, is a challenging task. We propose an extension of slow feature analysis (SFA) for supervised dimensionality reduction called graph-based SFA (GSFA). The algorithm extracts a label-predictive low-dimensional set of features that can be post-processed by typical supervised algorithms to generate the final label or class estimation. GSFA is trained with a so-called training graph, in which the vertices are the samples and the edges represent similarities of the corresponding labels. A new weighted SFA optimization problem is introduced, generalizing the notion of slowness from sequences of samples to such training graphs. We show that GSFA computes an optimal solution to this problem in the considered function space, and propose several types of training graphs. For classification, the most straightforward graph yields features equivalent to those of (nonlinear) Fisher discriminant analysis. The emphasis is on regression, where four different graphs were evaluated experimentally with a subproblem of face detection on photographs. The proposed method is particularly promising when linear models are insufficient, as well as when feature selection is difficult.
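
    In schematic form, the weighted optimization problem on a training graph looks as follows (our notation, with vertex weights v_n and edge weights gamma_{n,n'}; normalization constants are omitted and details may differ from the paper's exact definitions):

```latex
\begin{align}
  \min_{g_j}\;\; & \Delta_j \;\propto\;
      \sum_{n,n'} \gamma_{n,n'}\,\bigl(y_j(n') - y_j(n)\bigr)^2,
      \qquad y_j(n) = g_j(\mathbf{x}_n), \\
  \text{s.t.}\;\; & \sum_n v_n\, y_j(n) = 0, \quad
                    \sum_n v_n\, y_j(n)^2 = \text{const}, \quad
                    \sum_n v_n\, y_i(n)\, y_j(n) = 0 \;\; (i < j).
\end{align}
% Standard SFA is recovered when the graph is a linear chain over a time
% series with uniform vertex and edge weights.
```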

    Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells

    We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation by sparse coding. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
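
    A minimal sketch of the layered architecture is shown below: linear SFA applied patch-wise and stacked over a toy one-dimensional stimulus. The stimulus, patch sizes, and number of layers are hypothetical and far simpler than the model described above:

```python
# Schematic two-layer SFA hierarchy over a 1D "retina" (toy illustration only).
import numpy as np

def slowest(x, n_out):
    """Minimal linear SFA: whiten, then take the slowest whitened directions."""
    x = x - x.mean(0)
    d, u = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (u[:, d > 1e-10] / np.sqrt(d[d > 1e-10]))
    dd, du = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ du[:, :n_out]

def sfa_layer(x, patch, stride, n_out):
    """Apply linear SFA to overlapping patches and concatenate the outputs."""
    outs = [slowest(x[:, i:i + patch], n_out)
            for i in range(0, x.shape[1] - patch + 1, stride)]
    return np.concatenate(outs, axis=1)

# Toy stimulus: a Gaussian "bump" sweeping slowly across a 1D sensor array,
# corrupted by fast pixel noise (the slow latent variable is the position).
rng = np.random.default_rng(0)
pos = 0.5 + 0.45 * np.sin(np.linspace(0, 12 * np.pi, 5000))
grid = np.linspace(0, 1, 64)
frames = np.exp(-((grid[None, :] - pos[:, None]) ** 2) / 0.01)
frames += 0.1 * rng.standard_normal(frames.shape)

h1 = sfa_layer(frames, patch=16, stride=8, n_out=4)   # layer 1 (local SFA nodes)
h2 = slowest(h1, n_out=8)                              # layer 2 (top SFA layer)
# In the full model, a sparse-coding step on h2 would then yield localized,
# place-field-like units; see the sketch under the next entry for that step.
print("top-layer output shape:", h2.shape)
```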

    From Grids to Places

    Hafting et al. (2005) described grid cells in the dorsocaudal region of the medial entorhinal cortex (dMEC). These cells show a strikingly regular grid-like firing pattern as a function of the position of a rat in an enclosure. Since the dMEC projects to the hippocampal areas containing the well-known place cells, the question arises whether and how the localized responses of the latter can emerge based on the output of grid cells. Here, we show that, starting with simulated grid cells, a simple linear transformation maximizing sparseness leads to a localized representation similar to place fields.
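
    A toy version of this transformation can be sketched with simulated grid responses and ICA as a stand-in for a sparseness-maximizing linear transformation (an assumption for illustration; the simulation details below are not taken from the paper):

```python
# Simulate grid-cell-like responses over a square enclosure and apply a
# linear sparseness-maximizing transformation (FastICA as a stand-in).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)

# Positions covering a 1 m x 1 m enclosure.
xs = np.linspace(0, 1, 40)
px, py = np.meshgrid(xs, xs)
pos = np.column_stack([px.ravel(), py.ravel()])          # (1600, 2)

def grid_cell(pos, scale, orientation, phase):
    """Sum of three plane waves at 60-degree offsets -> hexagonal firing map."""
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    k = (4 * np.pi / (np.sqrt(3) * scale)) * np.column_stack([np.cos(angles),
                                                              np.sin(angles)])
    return np.cos((pos - phase) @ k.T).sum(axis=1)

# Population of grid cells with varying scales, orientations, and phases.
cells = np.column_stack([
    grid_cell(pos,
              scale=rng.uniform(0.3, 0.7),
              orientation=rng.uniform(0, np.pi / 3),
              phase=rng.uniform(0, 1, size=2))
    for _ in range(60)
])

# Linear transformation maximizing sparseness (ICA used as a stand-in).
out = FastICA(n_components=20, random_state=0, max_iter=1000).fit_transform(cells)

# A localized unit concentrates its activity at few positions: report how
# peaked the most localized output map is relative to its mean activity.
maps = np.abs(out).T.reshape(20, 40, 40)
peakiness = maps.max(axis=(1, 2)) / (maps.mean(axis=(1, 2)) + 1e-12)
print("max peak-to-mean ratio across output units: %.1f" % peakiness.max())
```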

    Modeling place field activity with hierarchical slow feature analysis

    What are the computational laws of hippocampal activity? In this paper we argue for the slowness principle as a fundamental processing paradigm behind hippocampal place cell firing. We present six different studies from the experimental literature, performed with real rats, that we replicated in computer simulations. Each of the chosen studies allows rodents to develop stable place fields and then examines a distinct property of the established spatial encoding: adaptation to cue relocation and removal; direction-dependent firing in the linear track and open field; and morphing and scaling the environment itself. Simulations are based on a hierarchical Slow Feature Analysis (SFA) network topped by an independent component analysis (ICA) output layer. The slowness principle is shown to account for the main findings of the presented experimental studies. The SFA network generates its responses using raw visual input only, which adds to its biological plausibility but requires experiments performed in light conditions. Future iterations of the model will thus have to incorporate additional information, such as path integration and grid cell activity, in order to be able to also replicate studies that take place during darkness.
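
    The kind of spatial analysis such replications rely on can be illustrated with an occupancy-normalized rate map computed from a trajectory and a single unit's activity; the toy data, binning, and lack of smoothing below are our choices, not the paper's:

```python
# Build an occupancy-normalized activity map ("rate map") from a trajectory
# and one unit's activity. Hypothetical toy data for illustration.
import numpy as np

def rate_map(positions, activity, n_bins=20, extent=1.0):
    """Mean activity per spatial bin, normalized by time spent in that bin."""
    edges = np.linspace(0.0, extent, n_bins + 1)
    ix = np.clip(np.digitize(positions[:, 0], edges) - 1, 0, n_bins - 1)
    iy = np.clip(np.digitize(positions[:, 1], edges) - 1, 0, n_bins - 1)
    act = np.zeros((n_bins, n_bins))
    occ = np.zeros((n_bins, n_bins))
    np.add.at(act, (ix, iy), activity)
    np.add.at(occ, (ix, iy), 1.0)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(occ > 0, act / occ, np.nan)

# Toy trajectory (random walk in a 1 m x 1 m box) and a unit with a
# Gaussian "place field" centered at (0.3, 0.7).
rng = np.random.default_rng(2)
steps = 0.02 * rng.standard_normal((20000, 2))
traj = np.clip(np.cumsum(steps, axis=0) + 0.5, 0.0, 1.0)
act = np.exp(-np.sum((traj - np.array([0.3, 0.7])) ** 2, axis=1) / 0.02)

rm = rate_map(traj, act)
print("peak rate-map bin:", np.unravel_index(np.nanargmax(rm), rm.shape))
```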

    Memory storage fidelity in the hippocampal circuit

    Over the last decades, a standard model of the function of the hippocampus in memory formation has been established and tested computationally. It has been argued that the CA3 region works as an auto-associative memory and that its recurrent fibers are the actual storage site of the memories. Furthermore, to work properly, CA3 requires memory patterns that are mutually uncorrelated. It has been suggested that the dentate gyrus orthogonalizes the patterns before storage, a process known as pattern separation. In this study we review the model when random input patterns are presented for storage and investigate whether it is capable of storing patterns of more realistic entorhinal grid cell input. Surprisingly, we find that an auto-associative CA3 network is redundant for random inputs up to moderate noise levels and is only beneficial at high noise levels. When grid cell input is presented, auto-association is even harmful for memory performance at all noise levels. Furthermore, we find that Hebbian learning in the dentate gyrus does not support its function as a pattern separator. These findings challenge the standard framework and support an alternative view in which the simpler EC-CA1-EC network is sufficient for memory storage.
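
    The auto-associative storage and recall mechanism attributed to CA3 in the standard model can be illustrated with a textbook Hopfield-style sketch (a generic example, not the specific network, learning rule, or input statistics evaluated in the study):

```python
# Textbook auto-associative memory: Hebbian (outer-product) storage of patterns
# in a recurrent weight matrix and recall from a noisy cue.
import numpy as np

rng = np.random.default_rng(3)
n_units, n_patterns = 400, 20
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))

# Hebbian storage in the recurrent weights (no self-connections).
w = patterns.T @ patterns / n_units
np.fill_diagonal(w, 0.0)

def recall(cue, steps=20):
    """Iterate the recurrent dynamics until the state settles."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1.0
    return s

# Cue = stored pattern with 20% of the units flipped (noisy input).
target = patterns[0]
flip = rng.random(n_units) < 0.2
cue = target * np.where(flip, -1.0, 1.0)

overlap_before = (cue @ target) / n_units
overlap_after = (recall(cue) @ target) / n_units
print(f"overlap with stored pattern: cue {overlap_before:.2f} -> recalled {overlap_after:.2f}")
```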

    Improving sensory representations using episodic memory

    The medial temporal lobe (MTL) is well known to be essential for declarative memory. However, a growing body of research suggests that MTL structures might be involved in perceptual processes as well. Our previous modeling work suggests that sensory representations in cortex influence the accuracy of episodic memory retrieved from the MTL. We adopt that model here to show that, conversely, episodic memory can also influence the quality of sensory representations. We model the effect of episodic memory as (a) repeatedly replaying episodes from memory and (b) recombining episode fragments to form novel sequences that are more informative for learning sensory representations than the original episodes. We demonstrate that the performance in visual discrimination tasks is superior when episodic memory is present and that this difference is due to episodic memory driving the learning of a more optimized sensory representation. We conclude that the MTL can, even if it has only a purely mnemonic function, influence perceptual discrimination indirectly.
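
    The two mechanisms, replay and recombination, can be sketched schematically as follows; the discrete-state episodes and the splicing rule are hypothetical simplifications, not the paper's model:

```python
# (a) Replay stored episodes verbatim; (b) recombine episode fragments at a
# shared state into novel sequences that can drive a temporal learner.
import numpy as np

rng = np.random.default_rng(4)

# Episodes = short sequences of discrete "sensory states".
episodes = [rng.integers(0, 10, size=12) for _ in range(30)]

def replay(episodes, n):
    """(a) Sample stored episodes verbatim."""
    return [episodes[i] for i in rng.integers(0, len(episodes), size=n)]

def recombine(episodes, n):
    """(b) Splice two episodes at a shared state, yielding a novel sequence."""
    out = []
    while len(out) < n:
        a, b = (episodes[i] for i in rng.integers(0, len(episodes), size=2))
        shared = np.intersect1d(a, b)
        if shared.size == 0:
            continue
        s = rng.choice(shared)
        ia = int(np.argmax(a == s))          # first occurrence of s in a
        ib = int(np.argmax(b == s))          # first occurrence of s in b
        out.append(np.concatenate([a[:ia + 1], b[ib + 1:]]))
    return out

training_stream = replay(episodes, 50) + recombine(episodes, 50)
print("training sequences:", len(training_stream),
      "| example recombined length:", len(training_stream[-1]))
```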

    Gaussian-binary restricted Boltzmann machines for modeling natural image statistics

    We present a theoretical analysis of Gaussian-binary restricted Boltzmann machines (GRBMs) from the perspective of density models. The key aspect of this analysis is to show that GRBMs can be formulated as a constrained mixture of Gaussians, which gives a much better insight into the model’s capabilities and limitations. We further show that GRBMs are capable of learning meaningful features without using a regularization term and that the results are comparable to those of independent component analysis. This is illustrated for both a two-dimensional blind source separation task and for modeling natural image patches. Our findings exemplify that reported difficulties in training GRBMs are due to the failure of the training algorithm rather than the model itself. Based on our analysis we derive a better training setup and show empirically that it leads to faster and more robust training of GRBMs. Finally, we compare different sampling algorithms for training GRBMs and show that Contrastive Divergence performs better than training methods that use a persistent Markov chain.
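
    For reference, one common parameterization of the GRBM and the resulting constrained-mixture view can be written as follows (conventions for the scaling of W by the variance differ across papers, so this is a sketch rather than necessarily the exact form used in the analysis):

```latex
\begin{align}
  E(\mathbf{x}, \mathbf{h})
    &= \frac{\lVert \mathbf{x} - \mathbf{b} \rVert^2}{2\sigma^2}
       - \mathbf{c}^\top \mathbf{h}
       - \frac{\mathbf{x}^\top W \mathbf{h}}{\sigma^2},
       \qquad \mathbf{h} \in \{0, 1\}^H. \\
  \intertext{Marginalizing over the binary hidden units gives a mixture of
  isotropic Gaussians whose component means and mixing weights are all tied to
  the same parameters (a ``constrained'' mixture of Gaussians):}
  p(\mathbf{x})
    &= \sum_{\mathbf{h}} p(\mathbf{h})\,
       \mathcal{N}\!\bigl(\mathbf{x};\, \mathbf{b} + W\mathbf{h},\, \sigma^2 I\bigr).
\end{align}
```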

    Spatial representations of place cells in darkness are supported by path integration and border information

    Effective spatial navigation is enabled by reliable reference cues that derive from sensory information from the external environment, as well as from internal sources such as the vestibular system. The integration of information from these sources enables dead reckoning in the form of path integration. Navigation in the dark is associated with the accumulation of errors in the perception of allocentric position, which may reflect error accumulation in path integration. We assessed this by recording from place cells in the dark under circumstances where spatial sensory cues were suppressed. Spatial information content, spatial coherence, place field size, and peak and in-field firing rates decreased, whereas sparsity increased, following exploration in the dark compared to the light. Nonetheless, place field stability in darkness was sustained by border information in a subset of place cells. To examine the impact of encountering the environment’s border on navigation, we analyzed the trajectory and spiking data gathered during navigation in the dark. Our data suggest that although error accumulation in path integration drives place field drift in darkness, under circumstances where border contact is possible, this information is integrated to enable retention of spatial representations.
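
    The spatial information content referred to above is commonly quantified with the Skaggs et al. (1993) measure, reproduced here for reference (the paper's exact definitions of this and the other measures may differ):

```latex
\begin{equation}
  I \;=\; \sum_i p_i \,\frac{\lambda_i}{\lambda}\,
          \log_2\!\frac{\lambda_i}{\lambda},
  \qquad \lambda = \sum_i p_i \lambda_i,
\end{equation}
% where p_i is the occupancy probability of spatial bin i and \lambda_i the
% mean firing rate in that bin; I is expressed in bits per spike.
```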